34 research outputs found

    Value-Based Allocation of Docker Containers

    Recently, an increasing number of public cloud vendors have added Containers as a Service (CaaS) to their service portfolios. This responds to the growing popularity of Docker, a software technology that allows Linux containers to run independently on a host in an isolated environment. As any software can be deployed in a container, containers differ in nature, and assorted allocation and orchestration approaches are therefore needed for their effective execution. In this paper, we focus on containers whose execution value for end users varies over time. A baseline and two dynamic allocation algorithms are proposed and compared with the default Docker scheduling algorithm. Experiments show that the proposed approach can increase the total value obtained from a workload by up to three times, depending on the workload heaviness. It is also demonstrated that the algorithms scale well with a growing number of nodes in a cloud.
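    The value-based allocation idea can be illustrated with a small greedy scheduler sketch. All names, capacities, and the value functions below are hypothetical assumptions for illustration; the paper's actual algorithms are not reproduced here. Containers are ranked by their current value to the end user and placed first-fit on nodes with spare capacity:

```python
def allocate_by_value(containers, nodes, now):
    """Greedy sketch: place containers with the highest current value first.

    containers: list of (name, cpu_demand, value_fn), where value_fn(t)
    gives the container's execution value to the end user at time t.
    nodes: dict mapping node name -> free CPU capacity (mutated in place).
    Returns a list of (container, node) placements.
    """
    # Rank containers by their value at the current time, highest first.
    ranked = sorted(containers, key=lambda c: c[2](now), reverse=True)
    placements = []
    for name, demand, value_fn in ranked:
        # First-fit over nodes with enough free capacity.
        for node, free in nodes.items():
            if free >= demand:
                nodes[node] = free - demand
                placements.append((name, node))
                break
    return placements

# Example: one container's value decays over time, the other's is constant.
containers = [
    ("batch-job", 2, lambda t: 10.0),
    ("live-report", 1, lambda t: max(0.0, 50.0 - t)),
]
nodes = {"node-a": 2, "node-b": 2}
print(allocate_by_value(containers, nodes, now=0))
```

    A real orchestrator would reconsider placements as values change over time; this sketch only shows a single scheduling round.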

    Feedback-Based Admission Control for Firm Real-Time Task Allocation with Dynamic Voltage and Frequency Scaling

    Feedback-based mechanisms can be employed to monitor the performance of Multiprocessor Systems-on-Chips (MPSoCs) and steer task execution even when the exact workload is unknown a priori. In particular, traditional proportional-integral controllers can be used with firm real-time tasks to either admit them to the processing cores or reject them so as not to violate the timeliness of the already admitted tasks. During periods of lower computational demand, dynamic voltage and frequency scaling (DVFS) can be used to reduce energy dissipation in the cores while still meeting the tasks' time constraints. Depending on the workload pattern and weight, the platform size, and the granularity of DVFS, energy savings can reach 60% at the cost of a slight performance degradation.
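    The admission test described above can be sketched as a minimal proportional-integral loop. The gains and the deadline-miss-ratio signal below are illustrative assumptions, not the paper's tuned controller:

```python
def make_pi_admission_controller(kp, ki, target_miss_ratio=0.0):
    """Sketch of a proportional-integral admission test.

    The controller tracks the deadline-miss ratio of admitted firm
    real-time tasks; a positive control signal indicates overload, so
    newly arriving tasks are rejected until the error settles.
    """
    integral = 0.0

    def admit(observed_miss_ratio):
        nonlocal integral
        error = observed_miss_ratio - target_miss_ratio
        integral += error
        control = kp * error + ki * integral
        return control <= 0.0  # admit only while no sustained overload

    return admit

admit = make_pi_admission_controller(kp=1.0, ki=0.5)
print(admit(0.0))   # no misses observed -> True (admit)
print(admit(0.2))   # misses appearing  -> False (reject)
```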

    Value and energy optimizing dynamic resource allocation in many-core HPC systems

    Conventional approaches to reducing the energy consumption of high performance computing (HPC) data centers focus on consolidation and dynamic voltage and frequency scaling (DVFS). Most of these approaches consider independent tasks (or jobs) and do not jointly optimize for energy and value. In this paper, we propose DVFS-aware profiling and non-profiling based approaches, which use design-time profiling results and perform all computations at run-time, respectively. The profiling based approach is suitable when the jobs or their structure are known at design time; otherwise, the non-profiling based approach is more suitable. Both approaches consider jobs containing dependent tasks and exploit efficient allocation combined with identification of voltage/frequency levels of the used system cores to jointly optimize value and energy. Experiments show that the proposed approaches reduce energy consumption by 15% compared to existing approaches, while achieving a significant amount of value and reducing the percentage of rejected jobs, which yield zero value.
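    The DVFS side of such joint optimization can be sketched as picking the slowest voltage/frequency level that still meets a job's deadline. The level table and the power model (dynamic power roughly proportional to V² · f) are illustrative assumptions, not the paper's platform model:

```python
def pick_vf_level(cycles, deadline, levels):
    """Pick the lowest voltage/frequency level that meets the deadline.

    levels: list of (voltage, freq_hz), sorted ascending by frequency.
    For a fixed cycle count, the slowest feasible level minimizes
    dynamic energy under the illustrative E ~ V^2 * f * t model.
    Returns (voltage, freq, energy) or None if no level is feasible.
    """
    for v, f in levels:
        exec_time = cycles / f
        if exec_time <= deadline:
            energy = (v ** 2) * f * exec_time  # proportional units
            return (v, f, energy)
    return None  # no level meets the deadline -> reject the job

levels = [(0.8, 1e9), (1.0, 2e9), (1.2, 3e9)]
print(pick_vf_level(cycles=1.5e9, deadline=1.0, levels=levels))
```

    Here the 1 GHz level is too slow (1.5 s > 1 s), so the 2 GHz level is chosen even though the 3 GHz level would also meet the deadline at a higher energy cost.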

    Value-Based Manufacturing Optimisation in Serverless Clouds for Industry 4.0

    There is increasing impetus towards Industry 4.0, a recently proposed roadmap for process automation across a broad spectrum of manufacturing industries. The proposed approach uses Evolutionary Computation to optimise real-world metrics. It is generic (i.e. applicable across multiple problem domains) and decentralised, i.e. hosted remotely from the physical system upon which it operates. In particular, by virtue of being serverless, the project goal is that computation can be performed 'just in time' in a scalable fashion. We describe a case study for value-based optimisation, applicable to a wide range of manufacturing processes. In particular, value is expressed in terms of Overall Equipment Effectiveness (OEE), grounded in monetary units. We propose a novel online stopping condition that takes into account the predicted utility of further computational effort. We apply this method to scheduling problems in the (max,+) algebra and compare it against a baseline stopping criterion with no prediction mechanism. Near-optimal profit is obtained by the proposed approach across multiple problem instances.
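    In the (max,+) algebra mentioned above, addition is replaced by max (a machine waits for its latest predecessor) and multiplication by + (processing times accumulate), which lets synchronized production steps be written as linear equations. A minimal sketch with made-up processing times (the paper's actual scheduling models are not reproduced):

```python
def maxplus_matmul(a, b):
    """Matrix product in the (max,+) semiring: + becomes max, * becomes +."""
    n, m, p = len(a), len(b), len(b[0])
    return [[max(a[i][k] + b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Two coupled machines; A[i][k] is the delay from machine k's last event
# to machine i's next event, and x holds the current event times.
A = [[3, 5], [2, 4]]
x = [[0], [0]]
print(maxplus_matmul(A, x))  # -> [[5], [4]]: next event times
```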

    Value and energy aware adaptive resource allocation of soft real-time jobs on many-core HPC data centers

    Modern high performance computing (HPC) data centers consume huge amounts of energy to operate. Therefore, appropriate measures are required to reduce their energy consumption. Existing efforts focus on consolidation and dynamic voltage and frequency scaling (DVFS). However, most of them do not perform adaptive resource allocation for executing dependent tasks (or jobs) in order to optimize both value and energy. Value is achieved by completing the execution of a job and depends on the completion time: a high value is achieved if the job is completed before its deadline, otherwise a lower value. In this paper, we propose an adaptive resource allocation approach that uses design-time profiling results of jobs for efficient allocation and adaptation in order to optimize both value and energy while executing dependent tasks. The profiling results for each job are obtained by exploiting efficient allocation combined with identification of voltage/frequency levels of the used system cores, and are used to adapt the number of cores based on the monitored execution progress of the job and the available cores. Experiments show that the proposed approach enhances the overall value by about 10% compared to existing approaches, while reducing energy consumption and the percentage of rejected jobs, which yield zero value.
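    The progress-based adaptation described above can be sketched as comparing the fraction of work completed against the fraction of the time budget spent. This is an illustrative policy with made-up thresholds, not the paper's exact adaptation rule:

```python
def adapt_cores(progress, elapsed, deadline, cores, max_cores):
    """Sketch: scale a job's core allocation from monitored progress.

    progress: fraction of the job's work completed (0.0 to 1.0).
    elapsed/deadline: time spent vs. the job's total time budget.
    Behind schedule -> grab a free core; ahead -> release one to save energy.
    """
    time_fraction = elapsed / deadline
    if progress < time_fraction and cores < max_cores:
        return cores + 1   # behind schedule -> claim an available core
    if progress > time_fraction and cores > 1:
        return cores - 1   # ahead of schedule -> release a core
    return cores

# 30% done at 50% of the deadline -> one more core is requested.
print(adapt_cores(progress=0.3, elapsed=0.5, deadline=1.0,
                  cores=4, max_cores=8))  # -> 5
```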